
    PasMoQAP: A Parallel Asynchronous Memetic Algorithm for solving the Multi-Objective Quadratic Assignment Problem

    Multi-Objective Optimization Problems (MOPs) have attracted growing attention during the last decades. Multi-Objective Evolutionary Algorithms (MOEAs) have been extensively used to address MOPs because they are able to approximate a set of high-quality non-dominated solutions. The Multi-Objective Quadratic Assignment Problem (mQAP) is one such MOP: a generalization of the classical QAP, which has been extensively studied and used in several real-life applications. The mQAP takes as input several flows between the facilities, which give rise to multiple cost functions that must be optimized simultaneously. In this study, we propose PasMoQAP, a parallel asynchronous memetic algorithm to solve the Multi-Objective Quadratic Assignment Problem. PasMoQAP is based on an island model that structures the population into sub-populations. The memetic algorithm on each island evolves a reduced population of solutions, and the islands cooperate asynchronously by sending selected solutions to their neighboring islands. The experimental results show that our approach significantly outperforms all the island-based variants of the multi-objective evolutionary algorithm NSGA-II. We show that PasMoQAP is a suitable alternative for solving the Multi-Objective Quadratic Assignment Problem. (Comment: 8 pages, 3 figures, 2 tables. Accepted at the IEEE Congress on Evolutionary Computation 2017 (CEC 2017).)
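    The abstract gives no implementation details, so the following sketch only illustrates the island-model idea it describes: independent sub-populations evolve locally and exchange their best solutions with neighboring islands. All names are hypothetical, the toy fitness is a single-flow QAP cost (PasMoQAP optimizes several flows at once), and migration here is synchronous for brevity, unlike PasMoQAP's asynchronous scheme.

```python
import random

def qap_cost(perm, flows, dist):
    """Toy single-flow QAP cost; the mQAP sums several such flow matrices."""
    n = len(perm)
    return sum(flows[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def evolve_island(pop, flows, dist, steps=50):
    """Crude local evolution: mutate the best by a swap, keep the population size."""
    for _ in range(steps):
        parent = min(pop, key=lambda p: qap_cost(p, flows, dist))
        child = parent[:]
        i, j = random.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
        pop.append(child)
        pop.sort(key=lambda p: qap_cost(p, flows, dist))
        pop.pop()  # drop the worst individual
    return pop

def island_model(n, flows, dist, islands=4, pop_size=10, epochs=5):
    """Ring topology: after each epoch every island sends its best solution
    to its neighbor (a synchronous stand-in for PasMoQAP's async migration)."""
    pops = [[random.sample(range(n), n) for _ in range(pop_size)]
            for _ in range(islands)]
    for _ in range(epochs):
        pops = [evolve_island(p, flows, dist) for p in pops]
        bests = [min(p, key=lambda s: qap_cost(s, flows, dist)) for p in pops]
        for k in range(islands):  # migrant replaces the neighbor's worst
            pops[(k + 1) % islands][-1] = bests[k][:]
    return min((s for p in pops for s in p),
               key=lambda s: qap_cost(s, flows, dist))
```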

    Optimizing Production Schedule with Energy Consumption and Demand Charges in Parallel Machine Setting

    Environmental sustainability concerns, along with the growing need for electricity and its associated costs, make energy-cost reduction an unavoidable decision-making criterion in production scheduling. In this research, we study the problem of production scheduling on non-identical parallel machines with machine-dependent processing times and known job release dates, minimizing total completion time and energy costs. The energy costs in this study include both demand and consumption charges. We present a mixed-integer nonlinear model to formulate the problem. The model is then linearized, and its performance is tested through numerical experiments.
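    To make the two tariff components concrete: a consumption charge is billed per kWh used, while a demand charge is billed on the peak power drawn during the billing period. The snippet below is a minimal illustration with made-up rates and loads; the paper's model jointly optimizes the schedule that produces these numbers, which this sketch does not attempt.

```python
# Hypothetical per-period power draw (kW) of a fixed schedule and illustrative
# tariffs; the paper's MINLP chooses the schedule itself, unlike this sketch.
power = [30.0, 55.0, 80.0, 42.0]   # total kW drawn in each 1-hour period
consumption_rate = 0.12            # $ per kWh (consumption charge)
demand_rate = 15.0                 # $ per kW of the period's peak (demand charge)

consumption_charge = consumption_rate * sum(power)    # pay for every kWh used
demand_charge = demand_rate * max(power)              # pay once for the peak draw
energy_cost = consumption_charge + demand_charge
print(consumption_charge, demand_charge, energy_cost)  # 24.84 1200.0 1224.84
```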

    Quantifying the regeneration of bone tissue in biomedical images via Legendre moments

    Article published in the conference proceedings. We investigate the use of Legendre moments as biomarkers for an efficient and accurate classification of bone tissue in images coming from stem cell regeneration studies. Regions of existing bone, cartilage, or new bone-forming cells are characterized at tile level to quantify the degree of bone regeneration under different culture conditions. Legendre moments are analyzed from three different perspectives: (1) their discriminant properties in a wide set of preselected feature vectors based on our clinical and computational experience, providing solutions whose accuracy exceeds 90%; (2) the amount of information to be retained when using Principal Component Analysis (PCA) to reduce the dimensionality of the problem from 2 to 6 dimensions; (3) the use of the (α, β)-k-feature set problem to identify the k = 4 features most relevant to our analysis from a combinatorial optimization approach. These techniques are compared in terms of computational complexity and classification accuracy to assess the strengths and limitations of using Legendre moments for this biomedical image processing application. (Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech.)
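    As a rough illustration of the feature-extraction step, the sketch below computes discrete 2-D Legendre moments of an image tile by naively discretizing the standard continuous definition; the function name, tile size, and moment orders are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.special import eval_legendre  # Legendre polynomial P_n at given points

def legendre_moment(tile, p, q):
    """Discrete 2-D Legendre moment lambda_{pq} of an image tile.
    Pixel coordinates are mapped onto [-1, 1] in both axes."""
    h, w = tile.shape
    x = np.linspace(-1.0, 1.0, w)
    y = np.linspace(-1.0, 1.0, h)
    norm = (2 * p + 1) * (2 * q + 1) / 4.0
    dx, dy = 2.0 / (w - 1), 2.0 / (h - 1)
    # Separable double sum: sum_ij P_q(y_j) * f(y_j, x_i) * P_p(x_i)
    return norm * dx * dy * (eval_legendre(q, y) @ tile @ eval_legendre(p, x))

tile = np.random.rand(32, 32)            # stand-in for a bone-tissue image tile
features = [legendre_moment(tile, p, q)  # low-order moments as a feature vector
            for p in range(3) for q in range(3)]
```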

    Uncovering Molecular Biomarkers That Correlate Cognitive Decline with the Changes of Hippocampus' Gene Expression Profiles in Alzheimer's Disease

    Background: Alzheimer's disease (AD) is characterized by a neurodegenerative progression that alters cognition. On a phenotypical level, cognition is evaluated by means of the Mini-Mental State Examination (MMSE), and the post-mortem examination of the neurofibrillary tangle count (NFT) helps to confirm an AD diagnosis. The MMSE evaluates different aspects of cognition, including orientation, short-term memory (retention and recall), attention, and language. As there is a normal cognitive decline with aging, and death is the final state at which NFT can be counted, the identification of brain gene expression biomarkers from these phenotypical measures has been elusive. Methodology/Principal Findings: We have reanalysed a microarray dataset contributed in 2004 by Blalock et al. of 31 samples corresponding to hippocampus gene expression from 22 AD subjects of varying degrees of severity and 9 controls. Instead of relying only on correlations of gene expression with the associated MMSE and NFT measures, we used modern bioinformatics methods based on information theory and combinatorial optimization to uncover a 1,372-probe gene expression signature that shows high consensus with established markers of progression in AD. The signature reveals alterations in calcium, insulin, phosphatidylinositol and Wnt signalling. Among the gene probes most correlated with AD severity we found those linked to synaptic function, neurofilament bundle assembly and neuronal plasticity. Conclusions/Significance: A transcription-factor analysis of the 1,372-probe signature reveals significant associations with the EGR/KROX family of proteins, MAZ, and E2F1. EGR1 (also known as zif268, Egr-1 or Zenk), together with other members of the EGR family, is consolidating a key role in neuronal plasticity in the brain. These results indicate a degree of commonality between putative genes involved in AD and prion-induced neurodegenerative processes that warrants further investigation.
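    The abstract does not specify the pipeline; as a loose, information-theoretic stand-in for the probe-filtering step, the sketch below ranks probes by mutual information with a discretized severity label using scikit-learn. The data, the labels, and the choice of mutual_info_classif are assumptions for illustration only, not the authors' method.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

# Hypothetical stand-in data: 31 samples x 5,000 probes, plus a severity label
# derived from MMSE (the study's actual pipeline is more elaborate than this).
rng = np.random.default_rng(0)
expression = rng.normal(size=(31, 5000))
severity = rng.integers(0, 3, size=31)   # e.g. control / moderate / severe

# Rank probes by mutual information with the severity class, an
# information-theoretic filter in the spirit of the methods described above.
mi = mutual_info_classif(expression, severity, random_state=0)
top_probes = np.argsort(mi)[::-1][:1372]  # keep the 1,372 highest-scoring probes
```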

    Iteratively refining breast cancer intrinsic subtypes in the METABRIC dataset

    BACKGROUND: Multi-gene lists and single-sample predictor models are currently used to reduce the multidimensional complexity of breast cancers and to identify intrinsic subtypes. The perceived inability of some models to deal with the challenges of processing high-dimensional data, however, limits the accurate characterisation of these subtypes. Towards the development of robust strategies, we designed an iterative approach to consistently discriminate intrinsic subtypes and improve class prediction in the METABRIC dataset. FINDINGS: In this study, we employed the CM1 score to identify the most discriminative probes for each group, and an ensemble learning technique to assess the ability of these probes to assign subtype labels using 24 different classifiers. Our analysis comprises an iterative computation of these methods and statistical measures performed on a set of over 2,000 samples. The refined labels assigned using this iterative approach proved to be more consistent and in better agreement with clinicopathological markers and patients' overall survival than those originally provided by the PAM50 method. CONCLUSIONS: The assignment of intrinsic subtypes has a significant impact on translational research for both understanding and managing breast cancer. The refined labelling therefore provides more accurate and reliable information by improving the source of fundamental science prior to clinical applications in medicine.
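    As an illustration of the ensemble-voting idea only (not the paper's exact 24-classifier setup, and not the CM1 score), the sketch below uses scikit-learn's VotingClassifier with three stand-in classifiers on synthetic five-class data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical stand-in for expression data with 5 intrinsic-subtype labels;
# the study votes over 24 classifiers, only three are shown here.
X, y = make_classification(n_samples=2000, n_features=50, n_informative=20,
                           n_classes=5, random_state=0)
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("knn", KNeighborsClassifier())],
    voting="hard")                      # majority vote assigns the subtype label
refined_labels = ensemble.fit(X, y).predict(X)
```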

    Computing Large-scale Distance Matrices on GPU

    A distance matrix is simply an n×n two-dimensional array that contains the pairwise distances of a set of n points in a metric space. It has a wide range of uses in several fields of scientific research, e.g., data clustering, machine learning, pattern recognition, image analysis, information retrieval, signal processing, and bioinformatics. However, as n increases, the computation of the distance matrix becomes very slow, or infeasible, on traditional general-purpose computers. In this paper, we propose an inexpensive and scalable data-parallel solution to this problem by dividing the computational tasks and data across GPUs. We demonstrate the performance of our method on a set of real-world biological networks constructed from a renowned breast cancer study.
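    The paper's GPU implementation is not reproduced in the abstract; the sketch below shows one plausible data-parallel layout using CuPy (an assumption, not the authors' toolchain): the n×n result is filled tile by tile, so the distance computation for each slab of rows runs as a single large GPU matrix product.

```python
import cupy as cp  # assumes a CUDA-capable GPU and the CuPy package

def distance_matrix_gpu(points, tile=4096):
    """Pairwise Euclidean distances computed tile-by-tile on the GPU, so
    each kernel launch only handles a (tile x n) block of the n x n result."""
    x = cp.asarray(points, dtype=cp.float32)
    n = x.shape[0]
    sq = cp.sum(x * x, axis=1)                # squared norms, shape (n,)
    out = cp.empty((n, n), dtype=cp.float32)
    for start in range(0, n, tile):
        block = x[start:start + tile]
        # ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b, computed for a whole tile
        d2 = sq[start:start + tile, None] + sq[None, :] - 2.0 * block @ x.T
        out[start:start + tile] = cp.sqrt(cp.maximum(d2, 0.0))
    return out
```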

    An automatic graph layout procedure to visualize correlated data

    This paper introduces an automatic procedure to assist in the interpretation of a large dataset when a similarity metric is available. We propose a visualization approach based on a graph layout methodology that uses a Quadratic Assignment Problem (QAP) formulation. The methodology is presented using as a testbed a time series dataset of the Standard & Poor's 100, one of the leading stock market indicators in the United States. A weighted graph is created with the stocks represented by the nodes and the edges' weights related to the correlation between the stocks' time series. A heuristic for clustering is then proposed; it is based on partitioning the graph into disconnected subgraphs, allowing the identification of clusters of highly-correlated stocks. The final layout corresponds well with the perceived market notion of the different industrial sectors. We compare the output of this procedure with a traditional dendrogram approach to hierarchical clustering. (IFIP International Conference on Artificial Intelligence in Theory and Practice, Knowledge Acquisition and Data Mining. Red de Universidades con Carreras en Informática (RedUNCI).)
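    As a minimal sketch of the clustering heuristic described above, the snippet below keeps only the edges between strongly correlated series and reads the resulting connected components off as clusters; the threshold value and the use of networkx are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
import networkx as nx

def correlation_clusters(series, threshold=0.7):
    """Keep only edges between strongly correlated stocks and treat the
    connected components of the remaining graph as clusters."""
    corr = np.corrcoef(series)               # series: one row per stock
    g = nx.Graph()
    g.add_nodes_from(range(len(corr)))
    for i in range(len(corr)):
        for j in range(i + 1, len(corr)):
            if corr[i, j] >= threshold:       # drop weakly correlated pairs
                g.add_edge(i, j)
    return list(nx.connected_components(g))
```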

    QAPgrid: A Two Level QAP-Based Approach for Large-Scale Data Analysis and Visualization

    Background: The visualization of large volumes of data is a computationally challenging task that often promises rewarding new insights. There is great potential in the application of new algorithms and models from combinatorial optimisation. Datasets often contain "hidden regularities", and a combined identification and visualization method should reveal these structures and present them in a way that aids analysis. While several methodologies exist, including those that use non-linear optimization algorithms, severe limitations arise even when working with only a few hundred objects. Methodology/Principal Findings: We present a new data visualization approach (QAPgrid) that reveals patterns of similarities and differences in large datasets of objects for which a similarity measure can be computed. Objects are assigned to positions on an underlying square grid in a two-dimensional space. We use the Quadratic Assignment Problem (QAP) as a mathematical model to provide an objective function for the assignment of objects to positions on the grid. We employ a Memetic Algorithm (a powerful metaheuristic) to tackle the large instances of this NP-hard combinatorial optimization problem, and we show its performance on the visualization of real datasets. Conclusions/Significance: Overall, the results show that the QAPgrid algorithm is able to produce a layout that represents the relationships between objects in the dataset. Furthermore, it also represents the relationships between the clusters that are fed into the algorithm. We apply QAPgrid to an instance of 84 Indo-European languages, producing a near-optimal layout. Next, we produce a layout of 470 world universities that correlates highly with the score used by the Shanghai Jiao Tong University Academic Ranking of World Universities, without the need for an ad hoc weighting of attributes. Finally, our Gene Ontology-based study on Saccharomyces cerevisiae fully demonstrates the scalability and precision of our method as a novel alternative tool for functional genomics.
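    The sketch below sets up the grid-layout QAP on a toy instance: similarities play the role of flows, grid distances the role of location costs, and a permutation minimizing their product is sought, which pulls similar objects onto nearby cells. SciPy's quadratic_assignment (the FAQ heuristic) stands in here for the paper's Memetic Algorithm, which it is not.

```python
import numpy as np
from scipy.optimize import quadratic_assignment  # FAQ heuristic, standing in
                                                 # for the paper's memetic algorithm

n = 16                                  # toy instance: 16 objects on a 4x4 grid
rng = np.random.default_rng(1)
sim = rng.random((n, n))
sim = (sim + sim.T) / 2                 # symmetric pairwise similarities
np.fill_diagonal(sim, 0.0)

grid = np.array([(i, j) for i in range(4) for j in range(4)], dtype=float)
dist = np.linalg.norm(grid[:, None] - grid[None, :], axis=-1)

# QAP: choose a permutation pi minimizing sum_ij sim[i, j] * dist[pi(i), pi(j)],
# i.e. place strongly similar objects on nearby grid cells.
res = quadratic_assignment(sim, dist)
coords = grid[res.col_ind]              # 2-D grid position of each object
```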

    Analysis of the RLMS Adaptive Beamforming Algorithm Implemented with Finite Precision

    This paper studies the influence of finite wordlength on the operation of the RLMS adaptive beamforming algorithm. The convergence behavior of RLMS, based on the minimum mean square error (MSE), is analyzed for operation with finite precision. Computer simulation results verify that a wordlength of nine bits is sufficient for the RLMS algorithm to achieve performance close to that provided by full precision. The performance measures used include residual MSE, rate of convergence, error vector magnitude (EVM), and beam pattern. Based on all these measures, it is shown that the RLMS algorithm outperforms earlier algorithms such as least mean square (LMS), recursive least squares (RLS), modified robust variable step size (MRVSS), and constrained stability LMS (CSLMS).
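    The RLMS algorithm itself is the authors' contribution and is not specified in the abstract; the sketch below implements only the classic complex LMS building block, with the weights re-quantized at each step as a crude model of a finite-wordlength (e.g., nine-bit) implementation. The function names and the quantization scheme are illustrative assumptions.

```python
import numpy as np

def quantize(v, bits=9):
    """Crude fixed-point model: round real and imaginary parts to a
    2^(bits-1)-level grid, mimicking a finite-wordlength implementation."""
    scale = 2.0 ** (bits - 1)
    return (np.round(v.real * scale) + 1j * np.round(v.imag * scale)) / scale

def lms_beamformer(X, d, mu=0.01, bits=9):
    """Complex LMS weight update (the building block RLMS extends).
    X: (snapshots, elements) array signals; d: desired reference signal."""
    w = np.zeros(X.shape[1], dtype=complex)
    for x, dk in zip(X, d):
        e = dk - np.vdot(w, x)                  # error: reference minus w^H x
        w = quantize(w + mu * np.conj(e) * x, bits)  # steepest-descent step
    return w
```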